
Fixed bug in computing map metric for images with either no ground truth or no prediction #884

Merged
merged 22 commits into Lightning-AI:master on Mar 21, 2022

Conversation

mtailanian
Contributor


What does this PR do?

Fixes #794

When either the prediction or the ground truth is empty, the image was not taken into account for the metric computation.
This PR fixes that by returning proper values for such images.
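For context, a minimal sketch of the two scenarios this PR addresses, assuming a torchmetrics version where MeanAveragePrecision is importable from torchmetrics.detection.mean_ap and accepts the usual list-of-dict inputs (boxes, scores, labels):

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision()

# Case 1: an image with a ground-truth box but no predictions (a missed detection).
preds = [
    dict(
        boxes=torch.zeros((0, 4)),                # no predicted boxes
        scores=torch.zeros(0),
        labels=torch.zeros(0, dtype=torch.long),
    )
]
target = [
    dict(
        boxes=torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
        labels=torch.tensor([0]),
    )
]
metric.update(preds, target)

# Case 2: an image with a prediction but no ground-truth boxes (a false positive).
preds = [
    dict(
        boxes=torch.tensor([[20.0, 20.0, 60.0, 60.0]]),
        scores=torch.tensor([0.9]),
        labels=torch.tensor([0]),
    )
]
target = [
    dict(
        boxes=torch.zeros((0, 4)),                # no ground-truth boxes
        labels=torch.zeros(0, dtype=torch.long),
    )
]
metric.update(preds, target)

# Before this fix, such images were not taken into account in the mAP computation.
print(metric.compute())
```

With the fix, both updates contribute to the final result instead of being skipped.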

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typo fixes and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@codecov

codecov bot commented Mar 10, 2022

Codecov Report

Merging #884 (9630e39) into master (f144425) will decrease coverage by 0%.
The diff coverage is 97%.

@@          Coverage Diff          @@
##           master   #884   +/-   ##
=====================================
- Coverage      95%    95%   -0%     
=====================================
  Files         171    171           
  Lines        7149   7181   +32     
=====================================
+ Hits         6801   6818   +17     
- Misses        348    363   +15     

@Borda Borda added the bug / fix Something isn't working label Mar 10, 2022
@mtailanian
Contributor Author

Hello guys, just wanted to check if you're expecting some changes from my side? Maybe I'm missing something? I read the guidelines, but I'm not used to making contributions. Please let me know how we should proceed.

@Borda
Member

Borda commented Mar 18, 2022

@SkafteNicki or @twsl mind having a look... 🐰 ^^

@twsl
Contributor

twsl commented Mar 18, 2022

The change seems fine to me, but tests are missing. Mind adding them for validation, @mtailanian?

@mtailanian
Contributor Author

Hi @twsl, I just added some unit tests, and also fixed a bug that I encountered thanks to them :)
Let me know if that's alright now!
Thanks
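
Not the exact tests added in this PR, but a hypothetical sketch of the kind of case they need to cover (the test name and structure below are illustrative only):

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision


def test_image_with_no_ground_truth_is_not_skipped():
    """An image with predictions but no ground-truth boxes should still be
    counted (as false positives) rather than dropped from the computation."""
    metric = MeanAveragePrecision()
    preds = [
        dict(
            boxes=torch.tensor([[0.0, 0.0, 10.0, 10.0]]),
            scores=torch.tensor([0.8]),
            labels=torch.tensor([0]),
        )
    ]
    target = [dict(boxes=torch.zeros((0, 4)), labels=torch.zeros(0, dtype=torch.long))]
    metric.update(preds, target)
    result = metric.compute()
    # Only assert that the computation succeeds and returns the expected key;
    # the exact values follow the COCO conventions implemented by the fix.
    assert "map" in result
```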

@SkafteNicki
Member

Hi @mtailanian could you please move your unit tests to this file: https://github.com/PyTorchLightning/metrics/blob/master/tests/detection/test_map.py

@mtailanian
Contributor Author

Hi @SkafteNicki, I'm not sure I did what was expected, but I just added my unit tests to the specified file. Some of them were actually duplicated, so I just omitted them.

Thanks,
Matías

@mergify mergify bot added the ready label Mar 21, 2022
@SkafteNicki SkafteNicki enabled auto-merge (squash) March 21, 2022 17:27
@mergify mergify bot removed the ready label Mar 21, 2022
@SkafteNicki SkafteNicki added this to the v0.8 milestone Mar 21, 2022
@mergify mergify bot added the ready label Mar 21, 2022
@SkafteNicki SkafteNicki merged commit 7240bc9 into Lightning-AI:master Mar 21, 2022
Borda pushed a commit that referenced this pull request Mar 22, 2022
Fixed bug in computing map metric for images with either no ground truth or no prediction (#884)

* Fixed bug in computing map metric for images with either no ground truth or no prediction

* fixed bug. added unit tests

* simple

* - main

* Tensor

* moved unit tests

* Delete test_map.py

* changelog

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka <jirka.borovec@seznam.cz>
Co-authored-by: Nicki Skafte Detlefsen <skaftenicki@gmail.com>

(cherry picked from commit 7240bc9)
Labels
bug / fix Something isn't working, ready
Successfully merging this pull request may close these issues:
MeanAveragePrecision incorrect calculations when predictions are empty (#794)